As a relatively new form of sport, esports offers unparalleled data availability. Despite the vast amounts of data generated by game engines, extracting them and verifying their integrity for practical and scientific purposes is challenging. Our work aims to open esports to a broader scientific community by supplying raw and pre-processed files from StarCraft II esports tournaments. These files can be used in statistical and machine learning modeling tasks and related to various laboratory-based measurements (e.g., behavioral tests, brain imaging). We have gathered publicly available game-engine generated "replays" of tournament matches and performed data extraction and cleanup using a low-level Application Programming Interface (API) parser library. Additionally, we open-sourced and published all the custom tools that were developed in the process of creating our dataset. These tools include PyTorch and PyTorch Lightning API abstractions for loading and modeling the data. Our dataset contains replays from major and premiere StarCraft II tournaments since 2016. To prepare the dataset, we processed 55 tournament "replaypacks" containing 17930 files with game-state information. Based on an initial investigation of available StarCraft II datasets, we observe that, as of its publication, our dataset is the largest available source of StarCraft II esports data. Analysis of the extracted data holds promise for further Artificial Intelligence (AI), Machine Learning (ML), psychology, Human-Computer Interaction (HCI), and sports-related research in a variety of supervised and self-supervised tasks.
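A minimal sketch of how a PyTorch loading abstraction over such pre-processed replay files might look (the class name, directory layout, and JSON field names below are illustrative assumptions, not the dataset's published API):

```python
# Sketch only: a PyTorch Dataset over pre-processed, JSON-serialized game-state files
# from a local replaypack directory.  Field names are hypothetical.
import json
from pathlib import Path

import torch
from torch.utils.data import Dataset


class ReplaypackDataset(Dataset):
    def __init__(self, root_dir: str):
        # One JSON file per replay with extracted game-state information.
        self.files = sorted(Path(root_dir).glob("*.json"))

    def __len__(self) -> int:
        return len(self.files)

    def __getitem__(self, idx: int):
        with open(self.files[idx], "r", encoding="utf-8") as f:
            replay = json.load(f)
        # Hypothetical fields: a per-timestep feature matrix and the match outcome.
        features = torch.tensor(replay["game_state_features"], dtype=torch.float32)
        label = torch.tensor(replay["player_one_won"], dtype=torch.long)
        return features, label


# Usage idea: wrap with torch.utils.data.DataLoader (with a padding collate function
# for variable-length replays) for a supervised win-prediction task.
```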
In this paper, we propose a new short-term load forecasting (STLF) model based on a contextually enhanced hybrid and hierarchical architecture combining exponential smoothing (ES) and a recurrent neural network (RNN). The model is composed of two simultaneously trained tracks: the context track and the main track. The context track introduces additional information to the main track. It is extracted from representative series and dynamically modulated to adjust to the individual series forecasted by the main track. The RNN architecture consists of multiple recurrent layers stacked with hierarchical dilations and equipped with recently proposed attentive dilated recurrent cells. These cells enable the model to capture short-term, long-term, and seasonal dependencies across time series, as well as to dynamically weight the input information. The model produces both point forecasts and predictive intervals. The experimental part of the work, performed on 35 forecasting problems, shows that the proposed model outperforms its predecessor as well as standard statistical models and state-of-the-art machine learning models in terms of accuracy.
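As a rough illustration of the two-track idea only (the actual model additionally relies on exponential smoothing, hierarchical dilations, and attentive dilated recurrent cells, none of which are reproduced here), a context-modulated recurrent forecaster producing point forecasts and symmetric predictive intervals could be sketched in PyTorch as follows; all module names and shapes are assumptions:

```python
# Sketch: a context track encodes representative series into a context vector that is
# combined with the input of the main forecasting track at every time step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextualForecaster(nn.Module):
    def __init__(self, horizon: int, hidden: int = 64):
        super().__init__()
        self.context_rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.main_rnn = nn.GRU(input_size=1 + hidden, hidden_size=hidden, batch_first=True)
        # Two heads: point forecast and a width for a symmetric predictive interval.
        self.point_head = nn.Linear(hidden, horizon)
        self.width_head = nn.Linear(hidden, horizon)

    def forward(self, series: torch.Tensor, context_series: torch.Tensor):
        # series: (batch, steps, 1), context_series: (batch, context_steps, 1)
        _, ctx = self.context_rnn(context_series)            # (1, batch, hidden)
        ctx = ctx[-1].unsqueeze(1).expand(-1, series.size(1), -1)
        h, _ = self.main_rnn(torch.cat([series, ctx], dim=-1))
        last = h[:, -1]                                       # final hidden state
        point = self.point_head(last)
        width = F.softplus(self.width_head(last))
        return point, (point - width, point + width)          # forecast + interval
```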
Tools of Topological Data Analysis provide stable summaries encapsulating the shape of the considered data. Persistent homology, the most standard and well-studied data summary, suffers from a number of limitations: its computations are hard to distribute, it is hard to generalize to multifiltrations, and it is computationally prohibitive for big datasets. In this paper we study the concept of Euler Characteristic Curves for one-parameter filtrations and Euler Characteristic Profiles for multi-parameter filtrations. While they are a weaker invariant in one dimension, we show that Euler Characteristic based approaches do not share some of the handicaps of persistent homology: we present efficient algorithms to compute them in a distributed way, their generalization to multifiltrations, and their practical applicability to big data problems. In addition, we show that Euler Curves and Profiles enjoy a certain type of stability, which makes them a robust tool in data analysis. Lastly, to demonstrate their practical applicability, multiple use cases are considered.
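For intuition, an Euler Characteristic Curve of a one-parameter filtration is simply the running alternating sum of simplex counts; a minimal sketch (the input format is an assumption chosen for illustration):

```python
# Each simplex contributes (-1)^dim at its filtration value; the curve is the running
# sum of these contributions over increasing thresholds.
from itertools import groupby


def euler_characteristic_curve(simplices):
    """simplices: iterable of (filtration_value, dimension) pairs.
    Returns a list of (threshold, euler_characteristic) points."""
    events = sorted(simplices, key=lambda s: s[0])
    curve, chi = [], 0
    for value, group in groupby(events, key=lambda s: s[0]):
        chi += sum((-1) ** dim for _, dim in group)
        curve.append((value, chi))
    return curve


# Example: a triangle that gets filled in.  Vertices (dim 0) enter at t=0,
# edges (dim 1) at t=1, the 2-simplex at t=2.
simplices = [(0, 0), (0, 0), (0, 0), (1, 1), (1, 1), (1, 1), (2, 2)]
print(euler_characteristic_curve(simplices))  # [(0, 3), (1, 0), (2, 1)]
```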
Hierarchical decomposition of control is unavoidable in large dynamical systems. In reinforcement learning (RL), it is usually realized with subgoals defined at higher policy levels and achieved at lower policy levels. Reaching these goals can take a substantial amount of time, during which it is not verified whether they are still worth pursuing; yet, due to the randomness of the environment, these goals may become obsolete. In this paper, we address this gap in the state-of-the-art approaches and propose a method in which the validity of higher-level actions (and thus of lower-level goals) is constantly verified at the higher level. If the actions, i.e., lower-level goals, become inadequate, they are replaced by more appropriate ones. This way we combine the advantages of hierarchical RL (fast training) and flat RL (immediate reactivity). We study our approach experimentally on seven benchmark environments.
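A schematic sketch of the control loop described above, with all policy interfaces treated as placeholders rather than the paper's actual implementation:

```python
# Sketch: the higher level keeps checking whether its current subgoal is still
# adequate and replaces it as soon as it becomes obsolete, while the lower level
# keeps acting toward whatever subgoal is currently set.
def hierarchical_episode(env, high_policy, low_policy, max_steps=1000):
    state = env.reset()
    goal = high_policy.select_goal(state)
    for _ in range(max_steps):
        # Constant verification at the higher level.
        if not high_policy.still_adequate(state, goal):
            goal = high_policy.select_goal(state)   # replace an obsolete subgoal
        action = low_policy.act(state, goal)        # low level pursues the subgoal
        state, reward, done, _ = env.step(action)
        if done:
            break
```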
In this work, we propose the novel Prototypical Graph Regression Self-explainable Trees (ProGReST) model, which combines prototype learning, soft decision trees, and Graph Neural Networks. In contrast to other works, our model can be used to address various challenging tasks, including compound property prediction. In ProGReST, the rationale is obtained along with the prediction due to the model's built-in interpretability. Additionally, we introduce a new graph prototype projection to accelerate model training. Finally, we evaluate ProGReST on a wide range of chemical datasets for molecular property prediction and perform an in-depth analysis with chemical experts to evaluate the obtained interpretations. Our method achieves competitive results against state-of-the-art methods.
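As an illustration of the prototype-projection ingredient in the spirit of prototype-based models (names and shapes are assumptions, not the ProGReST code):

```python
# Sketch: each learned prototype vector is snapped to the closest embedding produced
# by the GNN encoder on the training set, so that every prototype corresponds to a
# real training graph and the explanation can be read off from the data.
import torch


@torch.no_grad()
def project_prototypes(prototypes: torch.Tensor, train_embeddings: torch.Tensor):
    # prototypes: (P, d), train_embeddings: (N, d)
    dists = torch.cdist(prototypes, train_embeddings)   # (P, N) pairwise distances
    nearest = dists.argmin(dim=1)                        # index of closest training graph
    return train_embeddings[nearest], nearest            # projected prototypes + indices
```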
We propose three novel pruning techniques to improve the cost and results of inference-aware Differentiable Neural Architecture Search (DNAS). First, we introduce a stochastic bi-path building block for DNAS that enables searching over inner hidden dimensions while keeping memory and computational complexity under control. Second, we propose an algorithm for pruning blocks within a stochastic layer of the SuperNet during the search. Third, we describe a novel technique for pruning unnecessary stochastic layers during the search. The optimized models resulting from the search are called PruNet and establish a new state-of-the-art Pareto frontier on NVIDIA V100 with respect to inference latency and ImageNet Top-1 image classification accuracy. Using PruNet as a backbone also outperforms GPUNet and EfficientNet on the COCO object detection task in terms of mean Average Precision (mAP).
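A rough sketch of what a stochastic bi-path building block can look like (an assumption-based illustration, not the paper's exact block): two candidate sub-blocks with different inner hidden sizes share the input, and learnable architecture logits choose between them via a Gumbel-Softmax sample; the path whose logit stays low is a natural candidate for pruning during the search.

```python
# Sketch of a two-path searchable block with learnable architecture weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiPathBlock(nn.Module):
    def __init__(self, channels: int, small_hidden: int, large_hidden: int):
        super().__init__()
        # Two candidate sub-blocks differing only in their inner hidden size.
        self.paths = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, h, 1), nn.ReLU(),
                          nn.Conv2d(h, channels, 1))
            for h in (small_hidden, large_hidden)
        ])
        self.arch_logits = nn.Parameter(torch.zeros(2))   # searchable architecture weights

    def forward(self, x: torch.Tensor, tau: float = 1.0):
        # Differentiable (straight-through) one-hot choice over the two paths.
        weights = F.gumbel_softmax(self.arch_logits, tau=tau, hard=True)
        return sum(w * path(x) for w, path in zip(weights, self.paths))
```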
Learning from positive and unlabeled data is an important problem that arises naturally in many applications. A significant limitation of almost all existing methods lies in the assumption that the propensity score function is constant (the SCAR assumption), which is unrealistic in many practical situations. Avoiding this assumption, we consider a parametric approach to the problem of jointly estimating the posterior probability and the propensity score functions. We show that, under mild assumptions, when both functions have the same parametric form (e.g., logistic with different parameters), the corresponding parameters are identifiable. Motivated by this, we propose two estimation methods: joint maximum likelihood, and a second method based on alternately applying two Fisher-consistent expressions. Our experimental results show that the proposed methods are comparable to or better than existing methods based on the Expectation-Maximization scheme.
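For reference, under the common single-sample PU formulation the joint maximum likelihood idea can be written as follows (the notation here is assumed for illustration, not taken from the paper):

```latex
% y_\theta(x) models the posterior P(Y=1 \mid x), e_\gamma(x) the propensity score
% P(S=1 \mid Y=1, x); only the labeling indicator S is observed, and an example is
% labeled only if it is positive and selected, so P(S=1 \mid x) = y_\theta(x)\, e_\gamma(x).
% The joint maximum likelihood method maximizes
\[
  \ell(\theta,\gamma)
  = \sum_{i=1}^{n} \Bigl[
      s_i \log\bigl(y_\theta(x_i)\, e_\gamma(x_i)\bigr)
    + (1 - s_i) \log\bigl(1 - y_\theta(x_i)\, e_\gamma(x_i)\bigr)
    \Bigr].
\]
```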
Efficient reinforcement learning requires a proper balance between exploration and exploitation, defined by the dispersion of the action distribution. However, this balance depends on the task, on the current stage of the learning process, and on the current environment state. Existing methods of specifying the dispersion of the action distribution require problem-dependent hyperparameters. In this paper, we propose to specify the action distribution dispersion automatically, using the following principle: this distribution should have sufficient dispersion to enable the evaluation of future policies. To that end, the dispersion should be tuned so that the actions in the replay buffer have a sufficiently high probability (density) under the modes of the distributions that generated them, but the dispersion should not be any higher. In this way, a policy can be effectively evaluated based on the actions in the buffer, while the exploratory randomness of actions decreases as the policy converges. The above principle is verified on the challenging Ant, HalfCheetah, Hopper, and Walker2D benchmarks, with good results. Our method makes the action standard deviation converge to values similar to those resulting from trial-and-error optimization.
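An illustrative controller for this principle might look as follows (an assumption-based sketch, not the paper's update rule): the action standard deviation is nudged so that the mean log-density of replayed actions, evaluated under Gaussians centered at the modes that generated them, tracks a target value, i.e., dispersion is kept "sufficient but not higher".

```python
# Sketch: grow std if the buffered actions are too unlikely under their generating
# modes, shrink it if they are more likely than the target requires.
import numpy as np


def adjust_std(std, buffer_actions, buffer_modes, target_log_density,
               lr=0.01, bounds=(1e-3, 2.0)):
    diff = buffer_actions - buffer_modes                      # (batch, action_dim)
    k = diff.shape[-1]
    # Mean log-density of buffered actions under N(mode, std^2 I).
    mean_log_density = np.mean(
        -0.5 * np.sum((diff / std) ** 2, axis=-1)
        - k * (np.log(std) + 0.5 * np.log(2.0 * np.pi))
    )
    std = std * np.exp(lr * (target_log_density - mean_log_density))
    return float(np.clip(std, *bounds))                       # keep std in a sane range
```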
Many crucial problems in deep learning and statistics are caused by the variational gap, i.e., the difference between the evidence and the evidence lower bound (ELBO). As a consequence, in the classical VAE model we obtain only a lower bound on the log-likelihood, since the ELBO is used as the cost function, and therefore we cannot compare log-likelihoods between models. In this paper, we propose a general and effective upper bound on the variational gap, which allows us to efficiently estimate the true evidence. We provide an extensive theoretical study of the proposed approach. Moreover, we show that by applying our estimate we can easily obtain both a lower and an upper bound on the log-likelihood of a VAE model.
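For reference, the gap in question comes from the standard decomposition of the evidence (not specific to this work):

```latex
% For a VAE with encoder q(z | x) and joint p(x, z), the evidence decomposes into the
% ELBO plus the variational gap, which equals the KL divergence between the
% approximate and the true posterior:
\[
  \log p(x)
  = \underbrace{\mathbb{E}_{q(z \mid x)}\!\left[\log \frac{p(x, z)}{q(z \mid x)}\right]}_{\mathrm{ELBO}(x)}
  + \underbrace{\mathrm{KL}\bigl(q(z \mid x)\,\|\,p(z \mid x)\bigr)}_{\text{variational gap}\ \ge\ 0}.
\]
% Hence any upper bound G(x) on the gap sandwiches the evidence:
% \mathrm{ELBO}(x) \le \log p(x) \le \mathrm{ELBO}(x) + G(x).
```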
We introduce a new training paradigm that enforces interval constraints on the neural network parameter space to control forgetting. Contemporary continual learning (CL) methods train neural networks efficiently from a stream of data while reducing the negative impact of catastrophic forgetting, yet they do not provide any firm guarantees that network performance will not deteriorate uncontrollably over time. In this work, we show how to put bounds on forgetting by reformulating continual learning of a model as a continual contraction of its parameter space. To that end, we propose HyperRectangle Training, a new training methodology in which each task is represented by a hyperrectangle in parameter space, fully contained within the hyperrectangles of previous tasks. This formulation reduces the NP-hard CL problem to polynomial time while providing resilience that fully prevents forgetting. We validate our claims by developing InterContiNet (Interval Continual Learning), an algorithm that leverages interval arithmetic to effectively model parameter regions as hyperrectangles. Through experimental results, we show that our approach performs well in the continual learning setting without the need to store data from previous tasks.
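A minimal sketch of the interval-arithmetic ingredient (an illustration, not the InterContiNet implementation): with the weights of a linear layer constrained to a hyperrectangle and a concrete input, the exact output interval follows from splitting the input into its positive and negative parts.

```python
# Sketch: exact output interval of a linear layer whose parameters lie in the
# hyperrectangle [w_lo, w_hi] x [b_lo, b_hi], evaluated on a concrete input x.
import torch


def interval_linear(x, w_lo, w_hi, b_lo, b_hi):
    # x: (batch, in_features); w_*: (out_features, in_features); b_*: (out_features,)
    x_pos, x_neg = x.clamp(min=0.0), x.clamp(max=0.0)
    y_lo = x_pos @ w_lo.T + x_neg @ w_hi.T + b_lo
    y_hi = x_pos @ w_hi.T + x_neg @ w_lo.T + b_hi
    return y_lo, y_hi   # every weight choice inside the hyperrectangle lands in here
```

Because each new task only shrinks the hyperrectangle, any parameter choice inside the final box also lies inside every earlier box, which is what prevents forgetting in this formulation.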